Spiking Neural Networks -- Part III: Neuromorphic Communications
Synergies between wireless communications and artificial intelligence are
increasingly motivating research at the intersection of the two fields. On the
one hand, the growing number of wirelessly connected devices, each with
its own data, is driving efforts to export advances in machine learning (ML)
from high performance computing facilities, where information is stored and
processed in a single location, to distributed, privacy-minded, processing at
the end user. On the other hand, ML can address algorithm and model deficits in
the optimization of communication protocols. However, implementing ML models
for learning and inference on battery-powered devices that are connected via
bandwidth-constrained channels remains challenging. This paper explores two
ways in which Spiking Neural Networks (SNNs) can help address these open
problems. First, we discuss federated learning for the distributed training of
SNNs, and then describe the integration of neuromorphic sensing, SNNs, and
impulse radio technologies for low-power remote inference.
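The distributed training described above can be illustrated with a minimal federated-averaging sketch. This is a generic FedAvg step, not the paper's specific protocol, and the function name `fed_avg` is hypothetical:

```python
import numpy as np

def fed_avg(local_weights, n_samples):
    """Federated averaging (generic sketch, not the paper's algorithm):
    combine per-device weight vectors after local SNN training,
    weighting each device by the size of its local dataset."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(local_weights, n_samples))

# Two devices with unequal amounts of local data.
global_w = fed_avg([np.array([1.0]), np.array([3.0])], n_samples=[1, 3])
```

In a bandwidth-constrained setting, each device would upload only its (possibly compressed) weight update, and the server would broadcast the aggregate back.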
BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian Learning
Artificial Neural Network (ANN)-based inference on battery-powered devices
can be made more energy-efficient by restricting the synaptic weights to be
binary, hence eliminating the need to perform multiplications. An alternative,
emerging, approach relies on the use of Spiking Neural Networks (SNNs),
biologically inspired, dynamic, event-driven models that enhance energy
efficiency via the use of binary, sparse, activations. In this paper, an SNN
model is introduced that combines the benefits of temporally sparse binary
activations and of binary weights. Two learning rules are derived, the first
based on the combination of straight-through and surrogate gradient techniques,
and the second based on a Bayesian paradigm. Experiments quantify the
performance loss with respect to full-precision implementations, and
demonstrate the advantage of the Bayesian paradigm in terms of accuracy and
calibration.
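The first learning rule combines two standard tricks: a straight-through estimator for the sign() binarization of the weights, and a surrogate gradient for the non-differentiable spike threshold. The following is a minimal NumPy sketch of that combination under generic assumptions (single layer, sigmoid surrogate), not the BiSNN implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(w_real, x, theta=0.0):
    # Forward pass uses binarized weights (sign of the real-valued copy).
    w_bin = np.sign(w_real)
    v = x @ w_bin                      # membrane potential
    s = (v > theta).astype(float)      # binary spike output
    return w_bin, v, s

def backward(w_real, x, v, grad_s, theta=0.0, slope=5.0):
    # Surrogate gradient: the derivative of the hard threshold is
    # replaced by the derivative of a steep sigmoid around theta.
    sig = sigmoid(slope * (v - theta))
    grad_v = grad_s * slope * sig * (1.0 - sig)
    grad_w_bin = np.outer(x, grad_v)
    # Straight-through estimator: the gradient w.r.t. the binarized
    # weights is passed to the real-valued weights, clipped to zero
    # where |w_real| > 1.
    return grad_w_bin * (np.abs(w_real) <= 1.0)
```

The real-valued weights act as a latent variable updated by gradient descent; only their signs are used at inference time, so no multiplications are needed on-device.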
Spiking Neural Networks -- Part I: Detecting Spatial Patterns
Spiking Neural Networks (SNNs) are biologically inspired machine learning
models that build on dynamic neuronal models processing binary and sparse
spiking signals in an event-driven, online, fashion. SNNs can be implemented on
neuromorphic computing platforms that are emerging as energy-efficient
co-processors for learning and inference. This is the first of a series of
three papers that introduce SNNs to an audience of engineers by focusing on
models, algorithms, and applications. In this first paper, we first cover
neural models used for conventional Artificial Neural Networks (ANNs) and SNNs.
Then, we review learning algorithms and applications for SNNs that aim at
mimicking the functionality of ANNs by detecting or generating spatial patterns
in rate-encoded spiking signals. We specifically discuss ANN-to-SNN conversion
and neural sampling. Finally, we validate the capabilities of SNNs for
detecting and generating spatial patterns through experiments.
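Rate encoding, mentioned above, maps a real-valued input to a spike train whose firing rate is proportional to the value. A common minimal scheme (a Bernoulli sketch, not the only encoding the paper covers) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_encode(x, T=200):
    """Encode values x in [0, 1] as binary spike trains of length T:
    at each time step, neuron i spikes with probability x[i]."""
    return (rng.random((T, *x.shape)) < x).astype(np.uint8)

x = np.array([0.1, 0.5, 0.9])
spikes = rate_encode(x)          # shape (T, 3), entries in {0, 1}
rates = spikes.mean(axis=0)      # decoded firing rates approximate x
```

Decoding by averaging over the T time steps recovers the original value up to sampling noise, which is the basis for ANN-to-SNN conversion: longer windows trade latency for accuracy.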
Bayesian Continual Learning via Spiking Neural Networks
Among the main features of biological intelligence are energy efficiency,
capacity for continual adaptation, and risk management via uncertainty
quantification. Neuromorphic engineering has been thus far mostly driven by the
goal of implementing energy-efficient machines that take inspiration from the
time-based computing paradigm of biological brains. In this paper, we take
steps towards the design of neuromorphic systems that are capable of adaptation
to changing learning tasks, while producing well-calibrated uncertainty
quantification estimates. To this end, we derive online learning rules for
spiking neural networks (SNNs) within a Bayesian continual learning framework.
In it, each synaptic weight is represented by parameters that quantify the
current epistemic uncertainty resulting from prior knowledge and observed data.
The proposed online rules update the distribution parameters in a streaming
fashion as data are observed. We instantiate the proposed approach for both
real-valued and binary synaptic weights. Experimental results using Intel's
Lava platform show the merits of Bayesian over frequentist learning in terms of
capacity for adaptation and uncertainty quantification. (Accepted for publication in Frontiers in Computational Neuroscience.)
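The idea of representing each synaptic weight by parameters that track epistemic uncertainty can be sketched with a mean-field Gaussian per weight, updated online via the reparameterization trick. This is a generic illustration under those assumptions, not the paper's derived rules; the class name `GaussianSynapse` is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianSynapse:
    """Each weight is modeled as q(w) = N(mu, sigma^2); sigma tracks the
    remaining epistemic uncertainty about that synapse (generic sketch)."""

    def __init__(self, n, prior_mu=0.0, prior_sigma=1.0):
        self.mu = np.full(n, prior_mu)
        # Parameterize sigma = softplus(rho) so it stays positive.
        self.rho = np.full(n, np.log(np.expm1(prior_sigma)))

    def sigma(self):
        return np.log1p(np.exp(self.rho))

    def sample(self):
        # Reparameterization: w = mu + sigma * eps, eps ~ N(0, 1).
        self.eps = rng.standard_normal(self.mu.shape)
        return self.mu + self.sigma() * self.eps

    def update(self, grad_w, lr=0.1):
        # Online (streaming) step: route the loss gradient w.r.t. the
        # sampled weight to both mu and rho. Call sample() first.
        self.mu -= lr * grad_w
        # d sigma / d rho = sigmoid(rho) for the softplus parameterization.
        self.rho -= lr * grad_w * self.eps / (1.0 + np.exp(-self.rho))
```

As data stream in, sigma shrinks where observations are informative and stays large elsewhere, which is what supports both continual adaptation and calibrated uncertainty estimates.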
Identifying neural substrates of competitive interactions and sequence transitions during mechanosensory responses in Drosophila.
Nervous systems have the ability to select appropriate actions and action sequences in response to sensory cues. The circuit mechanisms by which nervous systems achieve choice, stability, and transitions between behaviors are still incompletely understood. To identify neurons and brain areas involved in controlling these processes, we combined a large-scale neuronal inactivation screen with automated action detection in response to a mechanosensory cue in the Drosophila larva. We analyzed behaviors from 2.9 × 10^5 larvae and identified 66 candidate lines for mechanosensory responses, of which 25 affected competitive interactions between actions. We further characterized the neurons in these lines in detail and analyzed their connectivity using electron microscopy. We found that neurons in the mechanosensory network are located in different regions of the nervous system, consistent with a distributed model of sensorimotor decision-making. These findings provide a basis for understanding how the nervous system controls the selection of, and transitions between, behaviors.